% temporary tex definitions
\def\beginlisp#1\endlisp{\relax}
\newcount\funcount\funcount=0
% superseded by the definitions of \beginfundef and \endfundef below
%\def\beginfundef{$$\advance\funcount by 1\leqno{\the\funcount.}}
%\def\endfundef{$$}
\def\beginfundef{%
  \advance\funcount by 1   
   $$\hbox to \displaywidth\bgroup
    \rm (\number\funcount)\hfil
   $\vcenter\bgroup\openup\jot\ialign\bgroup &$\displaystyle ##$\hfil\crcr
}
\def\endfundef{\crcr\egroup\egroup$\hfil\egroup$$}
\chapter{WRITING RECURSIVE FUNCTION DEFINITIONS}
\chaplab{WRITIN}


	In \chapref{READIN} we discussed the basic constructs of 
\lisp\ and explained how \lisp\ evaluates terms built up from them.  
The notion of recursively defined function was introduced and the rules for 
computing the value of a recursively defined function were given.  
In addition we showed how \lisp\ programs are represented as S-expressions and 
how these programs are interpreted by the function /eval/.   
By now you should be able to read and understand simple \lisp\ programs.  
The next step is learning
to write \lisp\ programs.  In principle you already know all that is necessary;
however, there are some basic ideas and  techniques which are 
useful in solving \lisp\ programming problems.  The purpose of
this chapter is to help you learn to think recursively and to familiarize
you with some of the basic techniques and
standard forms of recursive programs.  The final section contains 
\lisp\ programming problems.


\section{Static and dynamic ways of programming.}
\sectlab{tdvsbu}
	In order to write recursive function definitions, one must
think about programming differently than is customary when writing
programs in languages like FORTRAN or ALGOL or in machine language.
In these languages, one has in mind the state of the computation as
represented by the values of certain variables or locations in the
memory of the machine, and then one writes statements or machine
instructions in order to make the state change in an appropriate way.
Writing recursive function definitions requires a different approach.
One thinks about the value of the function itself: for what values of the
arguments is the value immediate, and, given arbitrary values of the
arguments, for what simpler arguments must the function already be known
in order to obtain its value for the given arguments?

	Consider the numerical function $n!\,$.  For
what argument is the value of the function immediate?  Clearly, for
$n = 0$ or $n = 1$, the value is immediately seen to be 1.  Moreover,
we can get the value for an arbitrary $n$ if we know the value for
$n-1$.  Also, we see that knowing the value for $n = 1$ is redundant,
since it can be obtained from the $n = 0$ case by the same rule that
gives the value for a general $n$ in terms of the value for $n-1$.
This leads to the simple recursive definition:
\beginfundef
\funlab{fcnfact}
n! ← \qif n=0 \qthen 1 \qelse n\times[n-1]!\,.
\endfundef

	We may regard this as a static way of looking at programming.
We ask what simpler cases the general case of our function depends on
rather than how we build up the desired state of the computation.
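
	For concreteness, the same definition might be rendered in S-expression
form roughly as follows (a sketch using a Common Lisp style /defun/; the name
/factorial/ is chosen here for illustration):
\beginlisp
(defun factorial (n)               ; n! by the recursive definition
  (if (= n 0)
      1
      (* n (factorial (- n 1)))))
\endlisp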

	An example of the dynamic approach to programming is
the following obvious ALGOL 60 program for computing $n!$:
\beginfundef
\funlab{fcnfactorial}
\vbox{
\ialign{$#$\hfill&$#$\hfill\cr
{\bf integer\ procedure\ }/factorial/(/n/); {\bf integer}\ s,i;\hidewidth\cr
{\bf begin}\cr
&/s/ := 1;\cr
&/i/ := /n/;\cr
/loop:/&\qif/i/= 0 \qthen \qgo /done/;\cr
&/s/ := /i/\ast/s/;\cr
&/i/ := /i/-1;\cr
&\qgo /loop/;\cr
/done/:&/factorial/:=/s/;\cr
{\bf end};\cr
}
}\endfundef
	One often hears that static ways are bad and dynamic ways are good,
but in many cases the LISP style program is shorter and clearer.
This style also provides a built-in mechanism of problem
solving which is rather like what is usually called {\it subgoaling}.  We shall
also see later how it leads to rather natural methods of proving
statements about programs.

	 Actually, when we discuss the mechanism of
recursion, it will turn out that the above LISP program for factorial
is inefficient
because it uses
the pushdown mechanism unnecessarily and should be replaced
by the following somewhat longer program that corresponds to the
above ALGOL program rather precisely:
\beginfundef
\funlab{fcnfact2}
\vbox{
\hbox{$/n/! ← /fact/[/n/,1],$}
\hbox{$/fact/[/i/,/s/] ← \qif /i/=0 \qthen /s/ \qelse /fact/[/i/-1, /i/\ast/s/].$}
}\endfundef
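
	In S-expression form this pair of definitions might be rendered roughly as
follows (a sketch; note that the recursive call to /fact/ is the last thing done,
so no pending multiplications need be saved while it runs):
\beginlisp
(defun factorial (n) (fact n 1))   ; n! = fact[n, 1]
(defun fact (i s)                  ; s accumulates the product
  (if (= i 0)
      s
      (fact (- i 1) (* i s))))
\endlisp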

	Perhaps the distinction between the two styles
is equivalent to what some people call
the distinction between /top-down\// and /bottom-up\// programming.
LISP offers both, but the static style is better developed in LISP than
in other languages, and we will emphasize it.

\section{Recursive definition of functions on natural numbers.}
\sectlab{numrec}

	In the next several sections we examine various forms of recursive
function definition and programs having such forms.
We begin by considering recursive definitions of numerical functions.
We have already seen one example, namely the factorial function.
The basic idea is to give a rule for computing the value of the function
for a particular value of the argument (input)
in terms of some collection of given (defined or built-in) functions and values of 
the function for smaller arguments.   Notice that we will have to
specify the value for 0 directly, as there are no smaller numbers.
This method of function definition parallels the description of the
domain of natural numbers in terms of the number 0 and the operation 
of successor: a number is either 0 or is obtained 
by applying successor to a previously constructed number.

	Recursive functions of natural numbers have been the subject of much study
by logicians and mathematicians.  One fairly nice result is that any
such function can be computed starting with the constant 0, the
basic functions {\it add1} (more commonly known as 
{\it successor}) and {\it sub1} (also known 
as {\it predecessor}), and the tools
for recursive definition described in \chapref{READIN}.
(Actually one can get by with less, but that is not the point of this discussion.)  
For example the sum and difference of two numbers are given by
\beginfundef
\funlab{fcnplus}
\funlab{fcndiffer}
\vbox{
\hbox{\hskip0\unit $/plus/[/n/, /m\//] ← \qif  /n/ \qequal 0 \qthen  /m/ 
\qelse  /add1/[/sub1/ /n/ + /m\//]$}
\vskip1ex
\hbox{\hskip0\unit $/differ/[/n/, /m\//] ← $}
\hbox{\hskip4\unit $    \qif  /n/ \qequal 0 \qthen  0$}
\hbox{\hskip4\unit $    \qelsif  /m/ \qequal 0 \qthen  /n/$}
\hbox{\hskip4\unit $    \qelse  /differ/[/sub1/ /n/, /sub1/ /m\//]$}
}\endfundef
while the predicate {\it greaterp}, which is true if the first argument is
greater than the second, can be computed by
\beginfundef
\funlab{fcngreaterp}
\vbox{
\hbox{\hskip0\unit $/greaterp/[/n/, /m\//] ← $}
\hbox{\hskip4\unit $    \qif  /n/ \qequal 0 \qthen  0 \qelsif  /m/ \qequal 
0 \qthen  1 \qelse  /sub1/ /n/ > /sub1/ /m/$}
}
\endfundef

Here we use 0 to represent \qF\ and 1 to represent \qT\ in order to
keep within the domain of numbers.  We also write $n+1$ instead of {\it add1 n}
and $n-1$ instead of {\it sub1 n}.
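
	These definitions carry over directly to S-expression form; a rough sketch,
with /add1/ and /sub1/ written out in terms of the built-in $+$ and $-$, is:
\beginlisp
(defun add1 (n) (+ n 1))           ; successor
(defun sub1 (n) (- n 1))           ; predecessor
(defun plus (n m)
  (if (= n 0) m (add1 (plus (sub1 n) m))))
(defun differ (n m)
  (cond ((= n 0) 0)
        ((= m 0) n)
        (t (differ (sub1 n) (sub1 m)))))
(defun greaterp (n m)              ; 1 for true, 0 for false
  (cond ((= n 0) 0)
        ((= m 0) 1)
        (t (greaterp (sub1 n) (sub1 m)))))
\endlisp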

	We could continue along these lines using {\it plus} to define {\it times},
{\it times} to define {\it exp}, {\it differ} to define 
{\it quot} and {\it rem}, and so forth
building up a collection of useful functions.  We will leave this as an exercise
for the reader.  The point is that at each stage the recursive definition
is expressed in terms of given functions in a very simple form.
Perhaps the simplest such form is the following schema:
\beginfundef
\funlab{eqnpr1}
\vcenter{
\hbox{\hskip0\unit $/fn/ /n/ ← \qif  /n/ \qequal 0 \qthen  /a/ \qelse 
 /h/[/n/ - 1, /fn/[/n/ - 1]]$}
}
\endfundef

Here {\it fn\/} is defined in terms of a fixed constant {\it a\/} and a given function 
{\it h}. 
This corresponds to what logicians call
``primitive recursion without parameters''.  If we
take  $h[k,m]=[k+1]\times m$ and $a = 1$ we have the definition of the
factorial function given above, \funref{fcnfact}.
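
	The schema itself can be expressed as a single function taking $a$ and $h$
as arguments (a sketch; the names /primrec/ and /fact/ here are chosen for
illustration):
\beginlisp
(defun primrec (a h n)             ; fn n = if n=0 then a else h[n-1, fn[n-1]]
  (if (= n 0)
      a
      (funcall h (- n 1) (primrec a h (- n 1)))))
(defun fact (n)                    ; factorial: a = 1, h[k,m] = [k+1]*m
  (primrec 1 (lambda (k m) (* (+ k 1) m)) n))
\endlisp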

If we allow parameters to be carried along we get the usual form of
primitive recursion (shown with only one parameter for simplicity)
\beginfundef
\funlab{eqnpr2}
\vbox{
\hbox{\hskip0\unit $/f/[/n/, /m\//] ← $}
\hbox{\hskip4\unit $    \qif  /n/ \qequal 0 \qthen  /g/ /m/ \qelse  /h/[/n/ 
- 1, /m/, /f/[/n/ - 1, /m\//]]$}
}
\endfundef

The definition of {\it plus\/} given above has this form with $g[m]=m$ and
$h[n,m,k]=k+1$.   As a further generalization we allow the 
parametric argument of /f\// (in the recursive call)
to be a given function /j\// of the parameter and the first argument, giving
\beginfundef
\funlab{eqnpr3}
\vbox{
\hbox{\hskip0\unit $/f/[/n/, /m\//] ← $}
\hbox{\hskip4\unit $    \qif  /n/ \qequal 0 \qthen  /g/ /m/$}
\hbox{\hskip4\unit $    \qelse  /h/[/n/ - 1, /m/, /f/[/n/ - 1, /j/[/n/ 
- 1, /m\//]]]$}
}
\endfundef
The definitions of /differ\// and /greaterp\// given above have this form.  Also the 
alternative recursive definition of the factorial function, \funref{fcnfact2},
is of this form.

	We may wish to express the computation directly in terms of several
preceding values instead of just $f[n-1]$.   One form of this
``course of values'' recursion is given by
\beginfundef
\funlab{eqnpr4}
\vbox{
\hbox{$/f/[/n\//] ← \qif /n/ \qequal 0 \qthen a↓0 \qelse$}
\hbox{\hskip4\unit$\qif /n/ \qequal 1 \qthen a↓1 \qelse$}
\hbox{\hskip4\unit$\cdots$}
\hbox{\hskip4\unit$\qif /n/ \qequal /b/ \qthen a↓b \qelse$}
\hbox{\hskip6\unit$/h/[/n/,/f/[p↓1[n]],\ldots,/f/[p↓c[n]]]$}
}
\endfundef
 where $0≤p↓i[n]<n$ for $1≤i≤c$ when $b<n$.  As an example we have the
definition of the function giving the $n$th Fibonacci number:
\beginfundef
\funlab{fcnfib}
\vbox{
\hbox{\hskip0\unit $/fib/ /n/ ← $}
\hbox{\hskip4\unit $    \qif  /n/ \qequal 0 \qthen  0$}
\hbox{\hskip4\unit $    \qelsif  /n/ \qequal 1 \qthen  1$}
\hbox{\hskip4\unit $    \qelse  /fib/ /sub1/ /n/ + /fib/ /sub1/ /sub1/ 
/n/$}
}
\endfundef
We note that this is a particularly unfavorable case for recursively defined 
functions, as evaluation from this definition takes on the order of $fib\>n$ steps 
for an input of $n$.  A more efficient definition is:
\beginfundef
\funlab{fcnfiba}
\vbox{
\hbox{\hskip0\unit $/fiba/ /n/ ← /fibb/[/n/, 0, 1]$}
\vskip1ex
\hbox{\hskip0\unit $/fibb/[/n/, /k/, /m\//] ← $}
\hbox{\hskip4\unit $    \qif  /n/ \qequal 0 \qthen  /k/ \qelse  /fibb/[/sub1/ 
/n/, /m/, /k/ + /m\//]$}
}
\endfundef
Evaluation of the $n$th Fibonacci number using this definition requires
only on the order of $n$ steps.
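
	Both definitions carry over directly to S-expression form (a sketch):
\beginlisp
(defun fib (n)                     ; takes on the order of fib n steps
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (t (+ (fib (- n 1)) (fib (- n 2))))))
(defun fiba (n) (fibb n 0 1))      ; takes on the order of n steps
(defun fibb (n k m)
  (if (= n 0)
      k
      (fibb (- n 1) m (+ k m))))
\endlisp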

	The above forms of recursive function definition all have a very nice 
property.  Namely, assuming that the given functions appearing in the schemas 
are total, the function defined by the schema is total.  That is, the rules given for
evaluation of such functions will always produce an answer in a finite number of
steps.  The general form of recursive definition is
\beginfundef
\funlab{eqnprec}
f[n↓0,\ldots,n↓k] ← \tau[f,n↓0,\ldots,n↓k]
\endfundef
where $\tau$ is some term involving /f/, the arguments $n↓i$, and perhaps additional
given functions.   Definitions of this general form are not guaranteed to
define total functions.   For example,
\beginfundef
\funlab{fcnloop}
/loop/ /n/ ← /loop/ /n/
\endfundef
is a perfectly good definition; however, the rules for computation will
not produce an answer for any value of /n/.  There are of course many cases
where the function defined by a recursive definition is total, but where
the definition does not fit any of the above patterns.  

	The standard example of a total function on natural numbers that
is not definable by primitive recursion is known as Ackermann's function and
is defined by
\beginfundef
\funlab{fcnack}
\vbox{
\hbox{\hskip0\unit $/ack/[/m/, /n\//] ← $}
\hbox{\hskip4\unit $    \qif  /m/ \qequal 0 \qthen  /add1/ /n/$}
\hbox{\hskip4\unit $    \qelsif  /n/ \qequal 0 \qthen  /ack/[/sub1/ /m/, 
1]$}
\hbox{\hskip4\unit $    \qelse  /ack/[/sub1/ /m/, /ack/[/m/, /sub1/ /n\//]]$}
}
\endfundef

The proof is based on showing that $/ack/[m,n]$ grows faster than any
primitive recursive function.
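
	Rendered in S-expression form the definition is equally short (a sketch),
even though the values grow enormously; for example $/ack/[3,3] = 61$, while
$/ack/[4,2]$ already has tens of thousands of decimal digits:
\beginlisp
(defun ack (m n)
  (cond ((= m 0) (+ n 1))
        ((= n 0) (ack (- m 1) 1))
        (t (ack (- m 1) (ack m (- n 1))))))
\endlisp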